Conditional Image Generation with PixelCNN Decoders
This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
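The conditioning mechanism and the gated convolutional layers mentioned above can be sketched in a few lines of NumPy. This is a toy, single-feature-map illustration, not the paper's implementation: the function names, the naive loop-based convolution, and the scalar conditioning projection (a dot product of a weight vector with the conditioning vector h, broadcast over all spatial positions) are all simplifying assumptions. The key ideas it shows are (1) the causal kernel mask that restricts each pixel to already-generated context and (2) the gated activation tanh(·) ⊙ σ(·) with an additive conditioning term in each gate.

```python
import numpy as np

def causal_mask(k, mask_type="B"):
    # Mask a k x k kernel so a pixel sees only pixels above it, or to
    # its left on the same row (raster-scan order). Type "A" also hides
    # the centre pixel and would be used for the first layer only.
    m = np.ones((k, k), dtype=np.float64)
    c = k // 2
    start = c if mask_type == "A" else c + 1
    m[c, start:] = 0.0   # centre row: everything right of the context
    m[c + 1:, :] = 0.0   # all rows below the centre
    return m

def masked_conv2d(x, w, mask):
    # Naive "same"-padded 2D convolution with the causal mask applied
    # to the kernel. Loop-based for clarity, not speed.
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, p)
    wm = w * mask
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * wm)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_cond_layer(x, h, w_f, w_g, v_f, v_g, mask):
    # Conditional gated unit:
    #   y = tanh(conv_f(x) + v_f . h) * sigmoid(conv_g(x) + v_g . h)
    # The conditioning vector h contributes the same additive bias at
    # every spatial position, so it steers *what* is generated without
    # depending on *where*.
    a = masked_conv2d(x, w_f, mask) + v_f @ h
    b = masked_conv2d(x, w_g, mask) + v_g @ h
    return np.tanh(a) * sigmoid(b)
```

With a type "B" mask, perturbing a pixel changes the layer's output only at that pixel and at pixels that come later in raster-scan order, which is what makes the model a valid autoregressive density over images.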
Reviews: Conditional Image Generation with PixelCNN Decoders
The paper addresses a significant problem in generative modeling and is quite interesting. However, the reviewer feels the current version is not well polished, due to several issues in the experimental section. For the rebuttal, please focus on the points marked (*), (**), and (***) in the following paragraphs. The reviewer is willing to change the score if all the concerns are addressed in the rebuttal.

Novelty: The proposed model is technically novel in the sense that it explores conditional modeling within the recent Pixel(R/C)NN framework.
Oord, Aaron van den, Kalchbrenner, Nal, Espeholt, Lasse, Kavukcuoglu, Koray, Vinyals, Oriol, Graves, Alex